Imaging markers of cerebral small vessel disease provide valuable information on brain health, but their manual assessment is time-consuming and hampered by substantial intra- and inter-rater variability. Automated rating could benefit biomedical research as well as clinical assessment, but the diagnostic reliability of existing algorithms is unknown. Here, we present the \textit{VAscular Lesions DetectiOn and Segmentation} (\textit{Where is VALDO?}) challenge, which was run as a satellite event of the international conference on Medical Image Computing and Computer Aided Intervention (MICCAI) 2021. The challenge aimed to promote the development of methods for the automated detection and segmentation of small and sparse imaging markers of cerebral small vessel disease, namely enlarged perivascular spaces (EPVS) (Task 1), cerebral microbleeds (Task 2), and lacunes of presumed vascular origin (Task 3), while leveraging weak and noisy labels. Overall, 12 teams participated in the challenge, proposing solutions for one or more tasks (4 for Task 1 - EPVS, 9 for Task 2 - Microbleeds, and 6 for Task 3 - Lacunes). Data from multiple sources were used for both training and evaluation. Results showed large variability in performance both across teams and across tasks, with particularly promising results for Task 1 - EPVS and Task 2 - Microbleeds, but no practically useful results yet for Task 3 - Lacunes. The challenge also highlighted performance inconsistency across cases that may deter use at the individual level, while still proving useful at the population level.
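As an illustration of how detection of such small, sparse markers is often scored, the sketch below computes lesion-wise precision, recall, and F1 by matching connected components between a predicted and a reference mask. This is a generic, hypothetical example, not the official VALDO evaluation: the function name and the any-overlap matching criterion are assumptions.

```python
import numpy as np
from scipy import ndimage

def lesion_wise_f1(pred_mask, gt_mask):
    """Lesion-wise F1: a reference lesion counts as detected if any predicted
    voxel overlaps its connected component, and a predicted lesion counts as a
    true positive if it overlaps any reference voxel (illustrative sketch)."""
    pred_mask = np.asarray(pred_mask, dtype=bool)
    gt_mask = np.asarray(gt_mask, dtype=bool)
    gt_labels, n_gt = ndimage.label(gt_mask)
    pred_labels, n_pred = ndimage.label(pred_mask)

    detected = sum(pred_mask[gt_labels == i].any() for i in range(1, n_gt + 1))
    true_preds = sum(gt_mask[pred_labels == j].any() for j in range(1, n_pred + 1))

    recall = detected / n_gt if n_gt else 0.0
    precision = true_preds / n_pred if n_pred else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0
```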
This paper concerns pseudo-labelling in segmentation. Our contribution is fourfold. Firstly, we present a new formulation of pseudo-labelling as an Expectation-Maximization (EM) algorithm, giving it a clear statistical interpretation. Secondly, we propose a semi-supervised medical image segmentation method based purely on the original pseudo-labelling, namely SegPL. We demonstrate that SegPL is competitive against state-of-the-art consistency-regularisation-based methods for semi-supervised segmentation on a 2D multi-class MRI brain tumour segmentation task and a 3D binary CT lung vessel segmentation task. The simplicity of SegPL allows for a lower computational cost compared with previous methods. Thirdly, we show that the effectiveness of SegPL may originate from its robustness against out-of-distribution noise and adversarial attacks. Lastly, under the EM framework, we introduce a probabilistic generalisation of SegPL via variational inference, which learns a dynamic threshold for pseudo-labelling during training. We show that SegPL with variational inference can at the same time perform uncertainty estimation on par with the gold-standard method, Deep Ensemble.
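As a rough illustration of plain pseudo-labelling for segmentation, the PyTorch sketch below turns confident predictions on unlabelled images into binary targets and adds them to the supervised loss. The fixed confidence threshold stands in for the dynamic threshold that SegPL learns via variational inference; the function, tensor shapes, and loss weighting are assumptions rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def pseudo_label_step(model, labelled_batch, unlabelled_images, threshold=0.9):
    """One training step of naive pseudo-labelling for binary segmentation.
    `labelled_batch` is an (images, masks) pair; `model` outputs per-pixel logits."""
    images, masks = labelled_batch
    sup_loss = F.binary_cross_entropy_with_logits(model(images), masks)

    # E-step (informally): build pseudo-labels from the model's own predictions.
    with torch.no_grad():
        probs = torch.sigmoid(model(unlabelled_images))
        pseudo = (probs > 0.5).float()
        confident = ((probs > threshold) | (probs < 1.0 - threshold)).float()

    # M-step (informally): fit the model to confident pseudo-labelled pixels only.
    unsup = F.binary_cross_entropy_with_logits(
        model(unlabelled_images), pseudo, reduction="none")
    unsup_loss = (unsup * confident).sum() / confident.sum().clamp(min=1.0)

    return sup_loss + unsup_loss
```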
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
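Since patch-based training was the most commonly reported way of dealing with samples that are too large to process at once, here is a minimal, hypothetical sketch of random 3D patch sampling; the patch size and uniform sampling scheme are assumptions chosen only for illustration.

```python
import numpy as np

def random_patches(volume, patch_size=(64, 64, 64), n_patches=8, rng=None):
    """Draw random sub-volumes from a 3D scan for patch-based training."""
    if rng is None:
        rng = np.random.default_rng()
    patches = []
    for _ in range(n_patches):
        corner = [int(rng.integers(0, s - p + 1))
                  for s, p in zip(volume.shape, patch_size)]
        window = tuple(slice(c, c + p) for c, p in zip(corner, patch_size))
        patches.append(volume[window])
    return np.stack(patches)

dummy_scan = np.zeros((128, 256, 256), dtype=np.float32)
print(random_patches(dummy_scan).shape)  # (8, 64, 64, 64)
```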
Topic modeling is widely used for analytically evaluating large collections of textual data. One of the most popular topic modeling techniques is Latent Dirichlet Allocation (LDA), which is flexible and adaptive, but not optimal for, e.g., short texts from various domains. We explore how the state-of-the-art BERTopic algorithm performs on short multi-domain text and find that it generalizes better than LDA in terms of topic coherence and diversity. We further analyze the performance of the HDBSCAN clustering algorithm utilized by BERTopic and find that it classifies a majority of the documents as outliers. This crucial yet often overlooked problem excludes too many documents from further analysis. When we replace HDBSCAN with k-Means, we achieve similar performance, but without outliers.
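A minimal sketch of the substitution discussed above, assuming the bertopic and scikit-learn packages are available: BERTopic's default HDBSCAN clusterer is replaced with k-Means so that every document is assigned to a topic and none is labelled as an outlier. The example corpus and the number of clusters are illustrative assumptions.

```python
from bertopic import BERTopic
from sklearn.cluster import KMeans
from sklearn.datasets import fetch_20newsgroups

# Stand-in corpus of documents (any list of strings would do).
docs = fetch_20newsgroups(subset="all",
                          remove=("headers", "footers", "quotes")).data[:2000]

# k-Means assigns every document to a cluster, so no "-1" outlier topic appears.
topic_model = BERTopic(hdbscan_model=KMeans(n_clusters=20, random_state=42))
topics, _ = topic_model.fit_transform(docs)

print(topic_model.get_topic_info().head())
```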
The need for data privacy and security -- enforced through increasingly strict data protection regulations -- renders the use of healthcare data for machine learning difficult. In particular, the transfer of data between different hospitals is often not permissible and thus cross-site pooling of data is not an option. The Personal Health Train (PHT) paradigm proposed within the GO-FAIR initiative implements an 'algorithm to the data' paradigm that ensures that distributed data can be accessed for analysis without transferring any sensitive data. We present PHT-meDIC, a productively deployed open-source implementation of the PHT concept. Containerization allows us to easily deploy even complex data analysis pipelines (e.g., genomics, image analysis) across multiple sites in a secure and scalable manner. We discuss the underlying technological concepts, security models, and governance processes. The implementation has been successfully applied to distributed analyses of large-scale data, including applications of deep neural networks to medical image data.
Deep learning models for organ contouring in radiotherapy are approaching clinical use, but at present there are few tools for automated quality assessment (QA) of the predicted contours. Using Bayesian models and their associated uncertainty, the process of detecting inaccurate predictions can potentially be automated. We investigate two Bayesian models for automated contouring using a quantitative measure, the expected calibration error (ECE), and a qualitative measure, region-based accuracy-vs-uncertainty (R-AvU) graphs. It is well understood that a model should have a low ECE to be considered trustworthy. In a QA context, however, a model should also have high uncertainty in inaccurate regions and low uncertainty in accurate regions. Such behaviour could direct the visual attention of expert users to potentially inaccurate regions, leading to a speed-up of the QA process. Using R-AvU graphs, we qualitatively compare the behaviour of different models in accurate and inaccurate regions. Experiments are conducted on the MICCAI2015 Head and Neck Segmentation Challenge dataset and on the DeepMindTCIA CT dataset using three models: DropOut-Dice, DropOut-CE (cross-entropy), and FlipOut-CE. Quantitative results show that DropOut-Dice has the highest ECE, while DropOut-CE and FlipOut-CE have the lowest ECE. To better understand the difference between DropOut-CE and FlipOut-CE, we use the R-AvU graph, which shows that FlipOut-CE has better uncertainty coverage in inaccurate regions than DropOut-CE. This combination of quantitative and qualitative metrics explores a new approach that helps to select which model can be deployed as a QA tool in clinical settings.
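For concreteness, the ECE referred to above can be computed as a bin-weighted gap between mean confidence and accuracy. The sketch below is a generic implementation, not the paper's evaluation code; the number of bins and the flattened per-voxel inputs are assumptions.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: weighted average, over equally spaced confidence bins, of the
    absolute gap between mean confidence and observed accuracy in each bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece
```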
Radiomics uses quantitative medical imaging features to predict clinical outcomes. Currently, for a new clinical application, the optimal radiomics method has to be found manually from the wide range of available options through a heuristic trial-and-error process. In this study, we propose a framework that automatically optimizes the construction of the radiomics workflow for each application. To this end, we formulate radiomics as a modular workflow and include a large collection of common algorithms for each component. To optimize the workflow per application, we employ automated machine learning, combining a random search with ensembling. We evaluate our method in twelve different clinical applications, resulting in the following areas under the curve: 1) liposarcoma (0.83); 2) desmoid-type fibromatosis (0.82); 3) primary liver tumors (0.80); 4) gastrointestinal tumors (0.77); 5) colorectal liver metastases (0.61); 6) melanoma metastases (0.45); 7) hepatocellular carcinoma (0.75); 8) mesenteric fibrosis (0.80); 9) prostate cancer (0.72); 10) glioma (0.71); 11) Alzheimer's disease (0.87); and 12) head and neck cancer (0.84). We show that our framework achieves performance competitive with human experts, outperforms a radiomics baseline, and performs similarly to or better than Bayesian optimization and more advanced ensembling approaches. Concluding, our method fully automatically optimizes the construction of the radiomics workflow, thereby streamlining the search for radiomics biomarkers in new applications. To facilitate reproducibility and future research, we publicly release six datasets, the software implementation of our framework, and the code to reproduce this study.
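The kind of optimization described above can be sketched with scikit-learn by treating the post-feature-extraction part of a radiomics workflow as a modular pipeline and letting a random search pick its components and hyperparameters. This is a hedged illustration, not the authors' released framework; the component choices, search space, and synthetic data are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import RobustScaler, StandardScaler
from sklearn.svm import SVC

# Stand-in for a patients-by-radiomics-features matrix and a binary outcome.
X, y = make_classification(n_samples=120, n_features=100, random_state=0)

pipeline = Pipeline([
    ("scaler", StandardScaler()),
    ("select", SelectKBest(f_classif)),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Each workflow component has several interchangeable options.
search_space = {
    "scaler": [StandardScaler(), RobustScaler()],
    "select__k": [10, 20, 50, "all"],
    "clf": [LogisticRegression(max_iter=1000),
            RandomForestClassifier(n_estimators=200),
            SVC(probability=True)],
}

search = RandomizedSearchCV(pipeline, search_space, n_iter=20,
                            scoring="roc_auc", cv=5, random_state=0)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```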
Advances in computer vision and machine learning techniques have led to significant development in 2D and 3D human pose estimation from RGB cameras, LiDAR, and radars. However, human pose estimation from images is adversely affected by occlusion and lighting, which are common in many scenarios of interest. Radar and LiDAR technologies, on the other hand, need specialized hardware that is expensive and power-intensive. Furthermore, placing these sensors in non-public areas raises significant privacy concerns. To address these limitations, recent research has explored the use of WiFi antennas (1D sensors) for body segmentation and key-point body detection. This paper further expands on the use of the WiFi signal in combination with deep learning architectures, commonly used in computer vision, to estimate dense human pose correspondence. We developed a deep neural network that maps the phase and amplitude of WiFi signals to UV coordinates within 24 human regions. The results of the study reveal that our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches, by utilizing WiFi signals as the only input. This paves the way for low-cost, broadly accessible, and privacy-preserving algorithms for human sensing.
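To make the mapping concrete, the PyTorch sketch below shows one hypothetical way a network could turn WiFi phase and amplitude tensors into per-pixel region scores and UV coordinates for 24 body regions. The layer sizes, input shape, and head layout are assumptions for illustration only and do not reflect the paper's actual architecture.

```python
import torch
import torch.nn as nn

class WiFiToDensePose(nn.Module):
    """Toy encoder that maps WiFi channel measurements (phase + amplitude per
    antenna) to a 3-channel output per body region: a region score and (u, v)."""
    def __init__(self, n_antennas=3, n_regions=24):
        super().__init__()
        in_channels = 2 * n_antennas  # phase and amplitude for each antenna
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(128, n_regions * 3, kernel_size=1)

    def forward(self, csi):  # csi: (batch, 2 * n_antennas, time, subcarriers)
        return self.head(self.encoder(csi))

dummy_csi = torch.randn(1, 6, 64, 30)  # fabricated input shape
print(WiFiToDensePose()(dummy_csi).shape)  # torch.Size([1, 72, 64, 30])
```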
Due to the environmental impacts caused by the construction industry, repurposing existing buildings and making them more energy-efficient has become a high-priority issue. However, a legitimate concern of land developers is associated with the buildings' state of conservation. For that reason, infrared thermography has been used as a powerful tool to characterize these buildings' state of conservation by detecting pathologies, such as cracks and humidity. Thermal cameras detect the radiation emitted by any material and translate it into temperature-color-coded images. Abnormal temperature changes may indicate the presence of pathologies; however, reading thermal images is not always straightforward. This research project aims to combine infrared thermography and machine learning (ML) to help stakeholders determine the viability of reusing existing buildings by identifying their pathologies and defects more efficiently and accurately. In this phase of the research project, we used a deep convolutional neural network (DCNN) image classification model to differentiate three levels of cracks in one particular building. The model's accuracy was compared between the MSX and thermal images acquired from two distinct thermal cameras and fused images (formed through multisource information) to test the influence of the input data and network on the detection results.
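A minimal sketch of a CNN classifier of the kind described, distinguishing three crack-severity classes from thermal, MSX, or fused images; the architecture and input size are assumptions for illustration and not the project's actual DCNN.

```python
import torch
import torch.nn as nn

class CrackSeverityCNN(nn.Module):
    """Small convolutional classifier for three crack-severity levels."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):  # x: (batch, 3, H, W) thermal / MSX / fused image
        return self.classifier(self.features(x).flatten(1))

logits = CrackSeverityCNN()(torch.randn(4, 3, 224, 224))
print(logits.shape)  # torch.Size([4, 3])
```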
The advances in Artificial Intelligence are creating new opportunities to improve the lives of people around the world, from business to healthcare, from lifestyle to education. For example, some systems profile the users using their demographic and behavioral characteristics to make certain domain-specific predictions. Often, such predictions impact the life of the user directly or indirectly (e.g., loan disbursement, determining insurance coverage, shortlisting applications, etc.). As a result, the concerns over such AI-enabled systems are also increasing. To address these concerns, such systems are mandated to be responsible, i.e., transparent, fair, and explainable to developers and end-users. In this paper, we present ComplAI, a unique framework to enable, observe, analyze and quantify explainability, robustness, performance, fairness, and model behavior in drift scenarios, and to provide a single Trust Factor that evaluates different supervised Machine Learning models not just from their ability to make correct predictions but from an overall responsibility perspective. The framework helps users to (a) connect their models and enable explanations, (b) assess and visualize different aspects of the model, such as robustness, drift susceptibility, and fairness, and (c) compare different models (from different model families or obtained through different hyperparameter settings) from an overall perspective, thereby facilitating actionable recourse for improvement of the models. It is model-agnostic and works with different supervised machine learning scenarios (i.e., Binary Classification, Multi-class Classification, and Regression) and frameworks. It can be seamlessly integrated with any ML life-cycle framework. Thus, this already deployed framework aims to unify critical aspects of Responsible AI systems for regulating the development process of such real systems.